202 research outputs found
HapticLever: Kinematic Force Feedback using a 3D Pantograph
HapticLever is a new kinematic approach for VR haptics which uses a 3D
pantograph to stiffly render large-scale surfaces using small-scale proxies.
The HapticLever approach does not consume power to render forces, but rather
puts a mechanical constraint on the end effector using a small-scale proxy
surface. The HapticLever approach provides stiff force feedback when the user
interacts with a static virtual surface, but allows the user to move their arm
freely when moving through free virtual space. We present the problem space,
the related work, and the HapticLever design approach. Comment: UIST 2022 Poster
Emotion-mapped Robotic Facial Expressions based on Philosophical Theories of Vagueness
As the field of robotics matures, robots will need some method of displaying and modeling emotions. One way of doing this is to use a human-like face on which the robot can make facial expressions corresponding to its emotional state. Yet the connection between a robot's emotional state and its physical facial expression is not an obvious one: while a smile can gradually increase or decrease in size, there is no principled method of using boolean logic to map changes in facial expressions to changes in emotional states. We give a philosophical analysis of the problem and show that it is rooted in the vagueness of robot emotions. We then outline several methods that have been used in the philosophical literature to model vagueness and propose an experiment that uses our humanoid robot head to determine which philosophical theory is best suited to the task.
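The contrast the abstract draws can be illustrated with a small sketch. This is not the authors' model; it simply contrasts a boolean emotion-to-expression mapping with a degree-based one in the spirit of degree theories of vagueness. All function names and thresholds are hypothetical.

```python
def boolean_smile(happiness: float) -> float:
    """Boolean mapping: the face is either fully smiling or not smiling at all."""
    return 1.0 if happiness >= 0.5 else 0.0  # arbitrary cut-off, chosen for illustration

def fuzzy_smile(happiness: float) -> float:
    """Degree-based mapping: smile size tracks emotional intensity smoothly."""
    return max(0.0, min(1.0, happiness))  # clamp intensity to [0, 1]

# A tiny change in emotional state near the cut-off flips the boolean face
# completely, while the degree-based face changes only slightly.
print(boolean_smile(0.49), boolean_smile(0.51))  # 0.0 1.0
print(fuzzy_smile(0.49), fuzzy_smile(0.51))      # 0.49 0.51
```

The discontinuity at the boolean cut-off is exactly the kind of arbitrary boundary that theories of vagueness try to avoid.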
Situated messages for asynchronous human-robot interaction
An ongoing issue in human-robot interaction (HRI) is how people and robots communicate with one another. While there is considerable work in real-time human-robot communication, fairly little has been done in the asynchronous realm. Our approach, which we call situated messages, lets humans and robots asynchronously exchange information by placing physical tokens – each representing a simple message – in meaningful physical locations of their shared environment. Using knowledge of the robot's routines, a person can place a message token at a location, where the location is typically relevant to redirecting the robot's behavior there. When the robot passes near that location, it detects the message and reacts accordingly. Similarly, robots can themselves place tokens at specific locations for people to read. Thus situated messages leverage embodied interaction, where token placement exploits the everyday practices and routines of both people and robots. We describe our working prototype, introduce application scenarios, explore message categories and usage patterns, and suggest future directions.
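The detect-and-react loop described above can be sketched in a few lines. This is a minimal illustration of the idea, not the authors' implementation; the token names, locations, and behaviors are hypothetical placeholders.

```python
# Tokens placed in the shared environment, keyed by location.
# Each token encodes one simple message for the robot.
placed_tokens = {
    ("hallway", 3): "skip_this_area",  # person redirects the robot's routine here
    ("kitchen", 1): "extra_pass",      # person requests a more thorough pass
}

def on_robot_passes(location):
    """When the robot nears a location, detect any token there and react."""
    message = placed_tokens.get(location)
    if message is None:
        return "continue_routine"  # no token: carry on as usual
    return message  # redirect behavior according to the situated message

print(on_robot_passes(("hallway", 3)))  # skip_this_area
print(on_robot_passes(("lobby", 2)))    # continue_routine
```

The same table could hold robot-placed tokens for people to read, making the exchange symmetric, as the abstract describes.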
Build Notifications in Agile Environments
In an agile software development environment, developers write code
that should work together to fulfill the wishes of the customer. Continuous
integration (CI) ensures that code from different individuals integrates
properly. CI compiles the entire codebase, deploys and tests it with each
change. CI alerts developers of any problems, because errors can be fixed
more easily if caught earlier in the process. This paper compares the
effectiveness of different types of mechanisms for notifying developers of a
successful or unsuccessful build. Two quantitative and qualitative user
studies were performed testing the effectiveness of three types of
notification devices: one virtual e-mail-based mechanism, one using ambient
lava lamps, and one robotic device. The results show that most developers
preferred an easily visible but unobtrusive ambient device combined with an
e-mail describing the problem in more detail.
Interacting with microseismic visualizations
Microseismic visualization systems present complex 3D
data of small seismic events within oil reservoirs to
allow experts to explore and interact with that data. Yet
existing systems suffer from several problems: 3D spatial
navigation and orientation are difficult, and selecting 3D
data is challenging due to occlusion and a lack of depth
perception. Our work mitigates these problems by applying
proxemic interactions and a spatial input device to
simplify how experts navigate through the visualization,
and a painting metaphor to simplify how they select that
information.
Test Bed for Human-Robot Interaction
This paper presents a dynamic experimental test bed for exploring and evaluating human-robot interaction (HRI). Our system is designed around the concept of playing board games involving collaboration between humans and robots in a shared physical environment. Unlike the classic human-versus-machine situation often established in computer-based board games, our test bed takes advantage of the rich interaction opportunities that arise when humans and robots play collaboratively as a team. To facilitate interaction within a shared physical environment, our game is played on a large checkerboard where human and robotic players can be situated and play as game pieces within the game. With meaningful interaction occurring within our confined setup, various aspects of human-robot interaction, such as interface methods, can be easily explored and evaluated. We also present the results of a user evaluation, which shows the sensitivity of our system in assessing robotic behaviours.
From the Desktop to the Tabletop: Bringing Virtual Games into the Physical World
Realism has become the watchword for modern games: games now
boast realistic lighting effects, realistic physics, realistic animation
systems, and realistic AI. Realism is highly sought simply because a realistic
game is a compelling one; as games become more and more indistinguishable
from reality, gamers are more willing to suspend disbelief and to lose
themselves in the game world. When the idea of realistic gaming is taken to
its logical extreme, one tends to imagine something akin to the holodeck from
Star Trek: a perfect immersive experience which could reproduce the sight,
sound, and touch of any scenario. Why is it that modern games fall short of
the perfect immersion provided by the holodeck? Graphics are the most
obvious discrepancy. Admittedly, the imagery produced by the holodeck is much
better than anything we can reasonably create today. But if modern games were
to attain perfect graphical photorealism, would this alone create a
holodeck-like experience for the player? Obviously not; a key component of
the experience is still missing: namely, a sense of physical immersion.
Although modern games are more realistic than ever before, there exist two
constraints which have served to limit the player's sense of immersion since
the advent of videogaming itself. First, in conventional gaming, action is
constrained to a flat display space, usually a video monitor. The game
experience is confined to a screen, seldom going beyond it to engage the
player in the external, physical world. Second, the player's ability to
interact with the game is usually tied to an arbitrary input device such as a
mouse, keyboard, or game pad. The player is never free to act on the game
world directly. Instead, he or she must issue all commands through this
intermediary device. It is our belief that mixed reality, a field of research
which attempts to integrate virtual entities into a user's physical
environment, can address both of these problems simultaneously. Due largely
to advances in hardware and software, implementing mixed reality is now more
accessible and affordable than ever before. In this paper we attempt to
illustrate the potential that mixed reality has for gaming and, through a
simple videogame implementation, Save Em, demonstrate how easily this
technique can be applied.
Sheep and Wolves Testbed for Interaction and Collaboration between Humans and Robots
This paper presents the first prototype of Sheep and Wolves, a
system for testing interaction and collaboration paradigms between humans and
robots. The paper's contributions are twofold: a mixed reality interface for
human-robot interaction, and a practical experimental tool for assessing how
different robotic behavioral patterns affect interaction and collaboration
with users. Sheep and Wolves places humans, robots and virtual entities in a
game environment where they have to collaborate and compete. The system is
designed around the classic Sheep and Wolves board game, played on a large
physical checkerboard. In the prototype presented here the user is playing a
single wolf in a pack of four autonomous robotic wolves trying to hunt a
single virtual sheep. The human interacts with the rest of the wolf pack using
a mixed reality video stream, a graphical interface and a text chat tool that
enables discussion and planning of future moves within the pack. In
preliminary testing, Sheep and Wolves was sensitive to differences in the
robots' behavioral patterns and suggested that robotic assertiveness (or
robotic chutzpah) might enhance the quality and trustworthiness of the
interaction.